32 research outputs found

    Zoom-in-Net: Deep Mining Lesions for Diabetic Retinopathy Detection

    Full text link
    We propose a convolutional neural network based algorithm for simultaneously diagnosing diabetic retinopathy and highlighting suspicious regions. Our contributions are twofold: 1) a network termed Zoom-in-Net, which mimics the zoom-in process of a clinician examining a retinal image. Trained with only image-level supervision, Zoom-in-Net can generate attention maps that highlight suspicious regions, and it predicts the disease level accurately from both the whole image and its high-resolution suspicious patches. 2) Only four bounding boxes generated from the automatically learned attention maps are enough to cover 80% of the lesions labeled by an experienced ophthalmologist, which demonstrates the good localization ability of the attention maps. By clustering features at high-response locations on the attention maps, we discover meaningful clusters containing potential lesions of diabetic retinopathy. Experiments show that our algorithm outperforms the state-of-the-art methods on two datasets, EyePACS and Messidor.
    Comment: accepted by MICCAI 201
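As a sketch of how bounding boxes might be read off a learned attention map, the snippet below thresholds the map and keeps the largest connected high-response components. The threshold rule, component ranking, and the `attention_to_boxes` helper are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy import ndimage

def attention_to_boxes(attn, threshold=0.5, max_boxes=4):
    """Return up to `max_boxes` bounding boxes (r0, c0, r1, c1) around the
    largest connected regions where the attention map exceeds a fraction
    of its maximum. A hypothetical sketch of attention-based localization."""
    mask = attn >= threshold * attn.max()
    labeled, n = ndimage.label(mask)                  # connected components
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    order = np.argsort(sizes)[::-1][:max_boxes]       # largest regions first
    slices = ndimage.find_objects(labeled)
    boxes = []
    for i in order:
        rs, cs = slices[i]
        boxes.append((rs.start, cs.start, rs.stop, cs.stop))
    return boxes
```

In practice the high-resolution patches cropped from such boxes would be fed back to the network alongside the whole image.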

    Combining Fine- and Coarse-Grained Classifiers for Diabetic Retinopathy Detection

    Full text link
    Visual artefacts of early diabetic retinopathy in retinal fundus images are usually small, inconspicuous, and scattered across the retina. Detecting diabetic retinopathy requires physicians to look at the whole image and fixate on specific regions to locate potential biomarkers of the disease. Taking inspiration from ophthalmologists, we therefore propose to combine coarse-grained classifiers, which detect discriminating features from whole images, with a recent breed of fine-grained classifiers that discover and pay particular attention to pathologically significant regions. To evaluate the performance of the proposed ensemble, we used the publicly available EyePACS and Messidor datasets. Extensive experimentation on binary, ternary, and quaternary classification shows that this ensemble largely outperforms individual image classifiers, as well as most published works, in most training setups for diabetic retinopathy detection. Furthermore, the performance of the fine-grained classifiers is notably superior to that of the coarse-grained image classifiers, encouraging the development of task-oriented fine-grained classifiers modelled after specialist ophthalmologists.
    Comment: Pages 12, Figures
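A minimal sketch of fusing a coarse-grained (whole-image) classifier with a fine-grained (region-attentive) one: average their per-class probabilities and take the argmax. The weighted-average rule and the `ensemble_predict` helper are illustrative assumptions, not the paper's exact fusion scheme.

```python
import numpy as np

def ensemble_predict(coarse_probs, fine_probs, weight=0.5):
    """Fuse class-probability vectors from two classifiers by a weighted
    average and return the predicted class index per sample."""
    fused = weight * np.asarray(coarse_probs) + (1 - weight) * np.asarray(fine_probs)
    return fused.argmax(axis=-1)
```

Setting `weight` on a validation split would let the ensemble lean toward whichever classifier is stronger for a given grading task.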

    Learned Pre-Processing for Automatic Diabetic Retinopathy Detection on Eye Fundus Images

    Full text link
    Diabetic Retinopathy is the leading cause of blindness in the world's working-age population. The main aim of this paper is to improve the accuracy of Diabetic Retinopathy detection by implementing a shadow removal and color correction step as a preprocessing stage for eye fundus images. For this, we rely on recent findings indicating that applying image dehazing in the inverted intensity domain amounts to illumination compensation. Inspired by this work, we propose a Shadow Removal Layer that allows us to learn the pre-processing function for a particular task. We show that learning the pre-processing function improves the performance of the network on the Diabetic Retinopathy detection task.
    Comment: Accepted to the International Conference on Image Analysis and Recognition (ICIAR 2019). Published at https://doi.org/10.1007/978-3-030-27272-2_3
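The idea that dehazing in the inverted intensity domain compensates illumination can be sketched for a single channel: invert the image (shadows then look like haze), apply a crude haze-model correction, and invert back. The atmospheric-light estimate, `omega`, and `t_min` values below are illustrative assumptions, not the paper's learned layer.

```python
import numpy as np

def illumination_compensate(img, omega=0.8, t_min=0.1):
    """Minimal single-channel illumination compensation via dehazing in
    the inverted intensity domain. `img` has values in [0, 1]."""
    inv = 1.0 - img                                   # shadows become "haze"
    A = inv.max()                                     # crude atmospheric light
    t = np.clip(1.0 - omega * inv / A, t_min, 1.0)    # transmission estimate
    dehazed = (inv - A) / t + A                       # invert the haze model
    return np.clip(1.0 - dehazed, 0.0, 1.0)
```

The paper's contribution is to make such a fixed pre-processing function learnable end-to-end for the downstream detection task.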

    Comparison of Local Analysis Strategies for Exudate Detection in Fundus Images

    Full text link
    Diabetic Retinopathy (DR) is a severe and widespread eye disease. Exudates are among the most prevalent signs of the early stage of DR, and early detection of these lesions is vital to prevent blindness. Detection of exudates is therefore an important diagnostic task for DR, in which computer assistance may play a major role. In this paper, a system based on local feature extraction and Support Vector Machine (SVM) classification is used to develop and compare different strategies for the automated detection of exudates. The main novelty of this work is allowing the detection of exudates using non-regular regions to perform the local feature extraction. To accomplish this objective, different methods for generating superpixels are applied to the fundus images of the E-OPHTA database, and texture and morphological features are extracted for each of the resulting regions. An exhaustive comparison among the proposed methods is also carried out.
    This paper was supported by the European Union's Horizon 2020 research and innovation programme under the Project GALAHAD [H2020-ICT2016-2017, 732613]. The work of Adrián Colomer has been supported by the Spanish Government under an FPI Grant [BES-2014-067889]. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
    Pereira, J.; Colomer, A.; Naranjo Ornedo, V. (2018). Comparison of Local Analysis Strategies for Exudate Detection in Fundus Images. In: Intelligent Data Engineering and Automated Learning – IDEAL 2018, Springer, pp. 174–183. https://doi.org/10.1007/978-3-030-03493-1_19
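The per-region step of such a pipeline can be sketched as follows: given a superpixel label map (e.g. from SLIC or waterpixels), compute a feature vector per region; these vectors would then feed an SVM classifier. The mean/standard-deviation features and the `region_features` helper are a minimal stand-in for the texture and morphological descriptors used in the paper.

```python
import numpy as np

def region_features(img, labels):
    """For each superpixel id in `labels`, return a simple feature vector
    [mean intensity, intensity std] computed over that (possibly
    irregular) region of `img`."""
    feats = []
    for r in np.unique(labels):
        vals = img[labels == r]
        feats.append([vals.mean(), vals.std()])
    return np.array(feats)
```

Because superpixels follow image boundaries, features computed this way respect lesion contours better than features over a fixed rectangular grid, which is the motivation for the non-regular regions compared in the paper.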

    The strong gravitational lens finding challenge

    Get PDF
    Large-scale imaging surveys will increase the number of galaxy-scale strong lensing candidates by perhaps three orders of magnitude beyond the number known today. Finding these rare objects will require picking them out of at least tens of millions of images, and deriving scientific results from them will require quantifying the efficiency and bias of any search method. To achieve these objectives, automated methods must be developed. Because gravitational lenses are rare objects, reducing false positives will be particularly important. We present a description and results of an open gravitational lens finding challenge. Participants were asked to classify 100 000 candidate objects as to whether they were gravitational lenses or not, with the goal of developing better automated methods for finding lenses in large data sets. A variety of methods were used, including visual inspection, arc and ring finders, support vector machines (SVM), and convolutional neural networks (CNN). We find that many of the methods will easily be fast enough to analyse the anticipated data flow. In test data, several methods were able to identify upwards of half the lenses without making a single false-positive identification, after applying thresholds on lens characteristics such as lensed-image brightness, size, or contrast with the lens galaxy. This is significantly better than humans were able to do by direct inspection. Multi-band, ground-based data are found to be better for this purpose than single-band space-based data with lower noise and higher resolution, suggesting that multi-colour data are crucial. Multi-band space-based data will be superior to ground-based data. The most difficult challenge for a lens finder is differentiating between rare, irregular, ring-like face-on galaxies and true gravitational lenses. The degree to which the efficiency and biases of lens finders can be quantified largely depends on the realism of the simulated data on which the finders are trained.
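The zero-false-positive comparison can be sketched directly: set the decision threshold just above the highest-scoring non-lens, then measure what fraction of true lenses still pass. The `recall_at_zero_fp` helper below is an illustrative assumption, not the challenge's official scoring code.

```python
import numpy as np

def recall_at_zero_fp(scores, labels):
    """Fraction of positives (labels == 1) recovered when the threshold is
    set so that no negative (labels == 0) is accepted."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    thresh = scores[labels == 0].max()        # strictest misleading negative
    return (scores[labels == 1] > thresh).mean()
```

A single hard negative with a high score drags this metric down sharply, which is why the abstract stresses that false positives dominate the difficulty for rare objects like lenses.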